
    Ordinal learning for emotion recognition in customer service calls


    Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data that is often scarce, leading to models that do not generalize well. In this paper, we efficiently combine confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation when training sound classification models. The proposed method pre-processes the instances awaiting labeling by computing their classifier confidence scores: candidates with low scores are delivered to human annotators, while those with high scores are labeled automatically by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that, compared to Passive Learning, Active Learning, and Self-Training, our approach requires significantly fewer labeled instances to reach the same performance in both scenarios. On a sound classification task with 16,930 sound instances, a 52.2% reduction in human-labeled instances is achieved in both the pool-based and stream-based scenarios.
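    The confidence-based routing this abstract describes can be sketched as follows; the classifier interface, class names, and the 0.9 threshold are illustrative assumptions, not values taken from the paper.

```python
def route_instances(instances, predict_proba, threshold=0.9):
    """Split unlabeled instances into machine-labeled and human-queued sets.

    High-confidence predictions are auto-labeled (self-training); the rest
    are queued for human annotation (active learning).
    """
    auto_labeled, needs_human = [], []
    for x in instances:
        probs = predict_proba(x)           # class-probability distribution
        label = max(probs, key=probs.get)  # most likely class
        if probs[label] >= threshold:
            auto_labeled.append((x, label))  # self-training: trust the model
        else:
            needs_human.append(x)            # active learning: ask annotator
    return auto_labeled, needs_human

# hypothetical toy classifier: confident only on the "bark" instance
toy = lambda x: {"dog": 0.95, "cat": 0.05} if x == "bark" else {"dog": 0.55, "cat": 0.45}
auto, human = route_instances(["bark", "noise"], toy)
```

    In a pool-based setting this routine would run once over the whole unlabeled pool; in a stream-based setting it would be applied to each arriving instance.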

    B7-H4 Polymorphism Influences the Prevalence of Diabetes Mellitus and Pro-Atherogenic Dyslipidemia in Patients with Psoriasis.

    BACKGROUND The co-inhibitory molecule B7-H4 is located in a genomic region associated with type 1 diabetes (T1D) susceptibility. However, the correlation of B7-H4 with glycometabolism and dyslipidemia has never been studied. OBJECTIVE To explore the influence of B7-H4 polymorphisms on the prevalence of diabetes mellitus (DM) and dyslipidemia in psoriasis. METHODS In this single-center cross-sectional study, we recruited 265 psoriatic patients receiving methotrexate (MTX) treatment. Thirteen single-nucleotide polymorphisms (SNPs) in B7-H4 were genotyped. Serum levels of total cholesterol (TC), triglycerides (TG), lipoprotein (a) (LP(a)), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein (LDL), apolipoprotein A1 (ApoA1), and apolipoprotein B (ApoB) were measured at baseline and week 12. RESULTS Compared with AA or AG genotype carriers, GG genotype carriers of rs12025144 in B7-H4 had a higher prevalence of DM (57.14% vs. 17.71% vs. 18.67%, p = 0.0018) and, among diabetic patients, a poorer response to MTX (p < 0.05). The AG genotype of rs2066398 was associated with higher levels of pro-atherogenic lipids. MTX significantly downregulated the level of the anti-atherogenic lipid ApoA1 in AA genotype carriers of rs2066398. CONCLUSIONS The rs12025144 and rs2066398 polymorphisms in B7-H4 were associated with a higher prevalence of DM and dyslipidemia in psoriasis, respectively.

    Analysis of beauvericin in Isaria cicadae Miquel culture on different media

    Objective To analyze beauvericin and enniatin concentrations in the culture complex after Isaria cicadae Miquel was inoculated on different media for a defined period. Methods One strain of Isaria cicadae Miquel was inoculated on 4 kinds of liquid culture media and 4 kinds of solid culture media. After incubation for 1-7 weeks at 25 ℃, high performance liquid chromatography tandem mass spectrometry was employed to detect beauvericin and enniatins. Results The detection rates of beauvericin in Isaria cicadae Miquel cultures on the 4 kinds of liquid media ranged from 42.9% to 100.0%, and the beauvericin concentrations varied from 1.0 to 94.6 μg/L. The detection rates of beauvericin in cultures on the 4 kinds of solid media were all 100.0%, and the beauvericin concentrations ranged from 60.9 to 44 677.5 μg/kg. Enniatins were not detected in Isaria cicadae Miquel cultures on any of the 8 kinds of media. Conclusion The strain of Isaria cicadae Miquel used in our study could produce beauvericin on all 8 kinds of culture media after incubation, and the beauvericin concentrations in solid media were much higher than those in liquid media. The strain did not produce enniatins on any of the 8 kinds of culture media.

    Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition

    While transformers and their conformer variants show promising performance in speech recognition, their large number of parameters leads to high memory costs during training and inference. Some works use cross-layer weight-sharing to reduce the number of model parameters, but the inevitable loss of capacity harms model performance. To address this issue, this paper proposes a parameter-efficient conformer that shares sparsely-gated experts. Specifically, we use a sparsely-gated mixture-of-experts (MoE) layer to extend the capacity of a conformer block without increasing computation. Then, the parameters of grouped conformer blocks are shared so that the number of parameters is reduced. Next, to give the shared blocks the flexibility to adapt representations at different levels, we design the MoE routers and normalization layers individually for each block. Moreover, we use knowledge distillation to further improve performance. Experimental results show that the proposed model achieves performance competitive with the full-parameter model while using 1/3 of the encoder's parameters.
    Comment: accepted in INTERSPEECH 202
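    The sharing scheme can be illustrated with a minimal sketch: one set of expert weights serves a whole group of blocks, while each block keeps its own router and normalization parameters. Top-1 gating, the dense expert form, and all shapes below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, n_blocks = 8, 4, 3

# one shared set of expert weights reused by every block in the group
shared_experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
# per-block routers and layer-norm gains remain individual
routers = [rng.standard_normal((d, n_experts)) for _ in range(n_blocks)]
gains = [np.ones(d) for _ in range(n_blocks)]

def moe_block(x, router, gain):
    """Top-1 sparsely-gated MoE: route each token to one shared expert."""
    logits = x @ router                      # (tokens, n_experts)
    choice = logits.argmax(axis=-1)          # top-1 expert per token
    out = np.stack([x[i] @ shared_experts[e] for i, e in enumerate(choice)])
    # per-block layer normalization with an individual gain
    norm = (out - out.mean(-1, keepdims=True)) / (out.std(-1, keepdims=True) + 1e-5)
    return gain * norm

x = rng.standard_normal((5, d))              # 5 tokens of dimension d
for router, gain in zip(routers, gains):     # grouped blocks reuse the experts
    x = moe_block(x, router, gain)
```

    Because `shared_experts` is counted once rather than once per block, the expert parameters shrink by a factor of `n_blocks`, while the per-block routers and norms preserve level-specific flexibility.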

    Demystifying Assumptions in Learning to Discover Novel Classes

    In learning to discover novel classes (L2DNC), we are given labeled data from seen classes and unlabeled data from unseen classes, and we train clustering models for the unseen classes. However, a rigorous definition of L2DNC has been unexplored, so its implicit assumptions remain unclear. In this paper, we demystify the assumptions behind L2DNC and find that high-level semantic features should be shared between the seen and unseen classes. This naturally motivates us to link L2DNC to meta-learning, which rests on exactly the same assumption. Based on this finding, L2DNC is not only theoretically solvable, but can also be empirically solved by meta-learning algorithms after slight modifications. This L2DNC methodology significantly reduces the amount of unlabeled data needed for training and makes L2DNC more practical, as demonstrated in experiments. The use of very limited data is also justified by the application scenario of L2DNC: since it is unnatural to label only seen-class data, the process underlying L2DNC is, causally, sampling rather than labeling. Therefore, unseen-class data should be collected alongside seen-class data, which is why they are novel and first need to be clustered.
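    The meta-learning connection can be illustrated with episodic sampling over seen classes, where a held-out class plays the role of an unseen class to be clustered. The data and episode construction below are illustrative assumptions, not the paper's algorithm.

```python
import random

random.seed(0)

# hypothetical seen-class labeled data: class name -> instances
seen = {"dog": ["d1", "d2", "d3"], "cat": ["c1", "c2"], "bird": ["b1", "b2"]}

def make_episode(data, n_novel=1):
    """Hold out n_novel classes to mimic unseen classes within seen data.

    The learner sees the support classes with labels and must cluster the
    query instances, whose classes were held out, just as in L2DNC.
    """
    classes = list(data)
    novel = random.sample(classes, n_novel)
    support = {c: data[c] for c in classes if c not in novel}  # labeled
    query = [x for c in novel for x in data[c]]                # to cluster
    return support, query

support, query = make_episode(seen)
```

    Repeating this over many random episodes trains a model that, at test time, clusters genuinely unseen classes, provided the seen and unseen classes share high-level semantic features.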

    TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation

    In few-shot domain adaptation (FDA), classifiers for the target domain (TD) are trained with accessible labeled data in the source domain (SD) and a few labeled data in the TD. However, in the current era data usually contain private information, e.g., data distributed on personal phones. Thus, private information will be leaked if we directly access SD data to train a target-domain classifier (as FDA methods require). In this paper, to thoroughly prevent privacy leakage in the SD, we consider a very challenging problem setting, named few-shot hypothesis adaptation (FHA), where the classifier for the TD has to be trained using only a few labeled target data and a well-trained SD classifier. Since we cannot access SD data in FHA, the private information in the SD is well protected. To this end, we propose a target-oriented hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier. TOHAN maintains two deep networks simultaneously: one focuses on learning an intermediate domain, while the other takes care of intermediate-to-target distributional adaptation and target-risk minimization. Experimental results show that TOHAN significantly outperforms competitive baselines.
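    The FHA setting can be made concrete with a toy sketch: only a trained SD hypothesis and a few labeled target samples are available, synthetic "intermediate" points are labeled by the SD hypothesis (a crude stand-in for TOHAN's learned generator network), and a target classifier is fit on intermediate plus target data. The 1-D data, threshold model, and generation scheme are all illustrative assumptions.

```python
import random

random.seed(1)

# fixed, well-trained source-domain hypothesis; no SD data is ever accessed
sd_hypothesis = lambda x: int(x > 0.0)
# the few labeled target samples available in FHA
target_few = [(1.5, 1), (-1.2, 0)]

# crude "intermediate domain": random candidates labeled by the SD hypothesis
intermediate = [(x, sd_hypothesis(x))
                for x in (random.uniform(-2, 2) for _ in range(50))]

def fit_threshold(data):
    """Fit a 1-D threshold classifier by exhaustive search over the data."""
    best, best_acc = 0.0, -1.0
    for t, _ in data:
        acc = sum((x > t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best, best_acc = t, acc
    return best

# train the target classifier on intermediate + few target data, no SD data
boundary = fit_threshold(intermediate + target_few)
target_clf = lambda x: int(x > boundary)
```

    The real method replaces both toy models with deep networks and learns the intermediate domain jointly with the adaptation, but the privacy property is the same: only the SD hypothesis, never SD data, is consulted.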